NetLogo User Community Models


## WHAT IS IT?

This simulation compares how different types of agents (social conformers and norm detectors) converge on a particular action while interacting across multiple settings. Social conformers adopt the most popular action in a given setting. Norm detectors observe the actions of other agents and also send and receive messages about those actions. Norm detectors recognize an action as a norm if and only if (a) the observed compliance with the action (i.e. the percentage of agents adopting it in a social setting) exceeds their personal threshold, and (b) the accumulated force of messages concerning that action (its 'message strength') exceeds 1. Once an action is regarded as a norm for a given social setting, a norm detector will adopt it regardless of what other agents are doing, although a norm detector may hold multiple norms for a given setting.

This is a replication of the model from chapter 7, "Hunting for Norms in Unpredictable Societies," in Minding Norms: Mechanisms and Dynamics of Social Order in Agent Societies, edited by Rosaria Conte, Giulia Andrighetto, and Marco Campenni, Oxford University Press, 2014.

## HOW IT WORKS

In this model there are two types of agents: SOCIAL CONFORMERS (SCs) and NORM DETECTORS (NDs). Agents interact in different situations or scenarios, the number of which is determined by the parameter "settings." In the original model there are 4 situations, each with 3 possible actions: 2 actions unique to that setting and 1 action available in all settings. In this model, the numbers of setting-specific and universal actions are set by the parameters "actions_per_setting" and "universal_actions," respectively.

Social conformers have no memory and adopt the action most frequently chosen by agents in their particular setting. Norm detectors have memory and select their action based on a salient norm (see below). All agents have the following attributes (declared roughly as in the sketch after this list):

1. agenda = personal agenda (sequence of settings randomly chosen)
2. time_allocation = time of performance in each scenario
3. vision = window of observation (capacity for observing and interacting with a fixed number of agents)
4. setting = which social setting the agent occupies at a given time; determines who the agent can interact with.
5. threshold = a value between 0 and 1 (salience); "frequency of the corresponding normative behaviors observed; i.e. the percentage of the compliant population" (p. 100)
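
A minimal sketch of how these attributes might be declared; the breed and variable names follow the description above but are assumptions, not necessarily the identifiers used in the actual code:

```
extensions [ matrix ]

breed [ norm-detectors norm-detector ]    ; NDs
breed [ conformers conformer ]            ; SCs

turtles-own [
  agenda           ; randomly ordered sequence of settings to visit
  time_allocation  ; time spent in each setting
  vision           ; number of other agents observable per tick
  setting          ; current social setting
  threshold        ; salience threshold, between 0 and 1
  current_action   ; action currently being performed (hypothetical name)
]

norm-detectors-own [
  working_memory   ; 3-row matrix: observed counts, observation totals, message strengths
  norm_board       ; actions recognized as norms
  action_history   ; list of past actions, capped at `memory` entries
  memory           ; memory length; the forget rate is 1 / memory
]
```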

NDs receive input from TWO sources: BEHAVIORS and MESSAGES. Messages are directed links sent to and from other NDs, with two attributes: (1) the content (WHAT is said), i.e. the action of the sender, which is what the sender communicates to other agents about; and (2) the means of conveying this content (HOW it is communicated). The 'HOW' attribute is the strength of the message, labelled "m." Varying message strengths are meant to simulate different forms of persuasion. The original text distinguishes assertions, requests, deontics (evaluations of actions as good/acceptable or bad/unacceptable), and normative valuations (assertions about what is right or wrong): "Every time a message containing a deontic (D) is received ... or a normative valuation (V) ... it will directly access the second layer of the architecture, giving rise to a candidate normative belief" (p. 99). In this model, the different kinds of normative messages are simulated by the varying strengths (HOWs) of those messages, using continuous rather than discrete values, which we regard as an improvement on the original model. For example, "assertions" are simulated by messages with low m values, whereas normative valuations are simulated by messages with high m values.
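
A sketch of how such a message might be represented as a directed link, reusing the hypothetical names from the sketch above (`messages`, `what`, `how`, and `current_action` are assumed names, not necessarily those in the actual code):

```
directed-link-breed [ messages message ]

messages-own [
  what   ; content: the action the sender is performing
  how    ; strength m, drawn between the forget rate (1 / memory) and 1
]

to send-random-message  ; ND procedure (sketch)
  let f 1 / memory
  let target one-of other norm-detectors with [ setting = [setting] of myself ]
  if target != nobody [
    if not out-message-neighbor? target [           ; avoid duplicate message links
      create-message-to target [
        set what [current_action] of myself
        set how f + random-float (1 - f)            ; HOW: a value between f and 1
      ]
    ]
  ]
end
```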

ND ROUTINE:

I. Update Messages

1a. Send a random out-message to another agent in the setting. Its strength (HOW) is set between "forget" and 1, where forget = 1 / memory.

1b. Pick a random in-message (if any available), record the action (WHAT) and strength (HOW).

1c. Update row 2 of the 'working memory' matrix (m, the message strength for each action). The update is currently produced by the following equation: "let new_m old_m ^ 2 + r2", meaning the new 'strength' (i.e. salience or accumulation) of messages about an action equals the previous strength for that action (0 to 1) squared, plus the strength of the new message. For example, agent i receives a message from agent j while they are in situation 2 about action 23 (the third action in situation 2). The strength (HOW) of this message is 0.3. If the previous strength for action 23 was 0.5, the new strength will be 0.5 ^ 2 + 0.3 = 0.55. The new strengths are recorded in row 2 (i.e. the third row) of the working_memory matrix.
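
A sketch of this update as an ND procedure (the procedure name and the column-index argument are assumptions; `working_memory` is the 3-row matrix described under DETAILS):

```
to update-message-strength [ action-column r2 ]  ; ND procedure (sketch)
  ; r2 is the strength (HOW) of the incoming message about this action
  let old_m matrix:get working_memory 2 action-column
  let new_m old_m ^ 2 + r2                       ; e.g. 0.5 ^ 2 + 0.3 = 0.55
  matrix:set working_memory 2 action-column new_m
end
```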

II. Set up the Norm_Board: IF v > threshold AND m > 1, THEN store the action as a norm on the "NORM_BOARD". Here v = OBSERVED COMPLIANCE, i.e. the percentage of observed agents in situation s performing action a (row 0 / row 1 of the working_memory matrix). [See the note under DETAILS about how OBSERVED COMPLIANCE is actually calculated; there are many possibilities.] In addition, the strength (m, row 2 of the working_memory matrix) of the messages about this action must be greater than 1 for it to become a new norm. Thus, observing other agents perform an action is by itself insufficient for it to become a norm; agents must also receive messages about the action. Currently the threshold value for m (strength) is 1, but this should be varied. Right now, m is the accumulated history of 'HOW's pertaining to a particular action, which updates the working_memory matrix as explained in the step above.
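
A sketch of this check for a single action, under the assumption that `norm_board` is a list of [action-label column] pairs (so the selection step below can look strengths up again); the procedure and argument names are hypothetical:

```
to check-norm [ action-label action-column ]     ; ND procedure (sketch)
  let observed matrix:get working_memory 0 action-column
  let total    matrix:get working_memory 1 action-column
  let v ifelse-value (total > 0) [ observed / total ] [ 0 ]  ; observed compliance
  let m matrix:get working_memory 2 action-column            ; accumulated message strength
  if v > threshold and m > 1 [
    let entry (list action-label action-column)
    if not member? entry norm_board [
      set norm_board lput entry norm_board
    ]
  ]
end
```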

III. FORGETTING (NDs). For NDs, between setting up the norm_board and selecting an action there is a procedure called "nds_action_forgetting." This reduces the strength m (messages, row 2 of the working_memory matrix) over time by a constant amount each tick; in the future, an exponential decay may be implemented. Right now, m for every action is weakened by f each tick, where f = 1 / memory. So, if memory is set to 5, m for each action is reduced by 0.2 each tick. To compensate for this, the reporter "m_strength" for new out-messages is set between f and 1. So, if memory is only 2, m is weakened by 0.5 each turn, but every new message received also has a strength between 0.5 and 1.
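
A sketch of that procedure, assuming the decay is a simple subtraction of f from every entry in row 2, floored at zero (the flooring is an assumption):

```
to nds_action_forgetting                         ; ND procedure (sketch)
  let f 1 / memory                               ; e.g. memory = 5 -> f = 0.2
  let n-columns item 1 matrix:dimensions working_memory
  foreach range n-columns [ col ->
    let m matrix:get working_memory 2 col
    matrix:set working_memory 2 col max (list 0 (m - f))  ; assumption: never below zero
  ]
end
```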

IV. Choose action: If the agent has a norm_board, it chooses the salient action for that situation, i.e. the action on its norm_board with the highest m (strength) value.
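
A sketch of that selection, continuing the assumption that `norm_board` holds [action-label column] pairs for actions in the current setting:

```
to-report salient-norm-action                    ; ND reporter (sketch)
  ; sort the norms on the board by message strength m and report the strongest;
  ; assumes norm_board is non-empty
  let sorted sort-by [ [a b] ->
    matrix:get working_memory 2 (item 1 a) < matrix:get working_memory 2 (item 1 b)
  ] norm_board
  report first last sorted                       ; the action label with the highest m
end
```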

V. Forgetting (action history). NDs have an "action_history" list which records their previous actions up to length "memory." If the length exceeds their memory, the oldest action is removed. CURRENTLY, ACTION HISTORY IS JUST A RECORDING DEVICE AND HAS NO FUNCTIONALITY.
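
A sketch of that bookkeeping (the procedure and argument names are hypothetical):

```
to remember-action [ chosen-action ]             ; ND procedure (sketch)
  set action_history lput chosen-action action_history
  if length action_history > memory
    [ set action_history but-first action_history ]  ; drop the oldest entry
end
```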

## DETAILS

I. HOW IS OBSERVED COMPLIANCE MEASURED? In some ways, "salience" is presupposed, because in any given setting only some actions are possible. Thus, we presume that only some actions are "salient" given the social setting! Here 'salience' is given by the "frequency of the corresponding normative behaviors observed; i.e. the percentage of the compliant population" (p. 100). This is ambiguous. I can think of at least 4 ways that *observed compliance* might be modeled. First, we can use either absolute or relative frequencies of compliance, i.e. the threshold may correspond to absolute numbers or to percentages. Second, we can count compliance among agents in the given setting or among all agents across all settings, which changes things radically! Cross-tabulating yields 4 possibilities.

Here we opt for a 5th measure: the relative compliance observed in a situation over the entire run. In other words, we calculate TOTAL COMPLIANCE as follows: let c_a = the number of agents observed complying with action a, and n = the total number of agents *in a given setting*. Then TC = [c_a(t0) + c_a(t1) + c_a(t2) + ...] / [n(t0) + n(t1) + n(t2) + ...].

Moreover, this can be done in two ways: (1) each agent observes only one other agent at a time, so the relative frequencies are updated by 1 each tick; or (2) each agent observes, all at once, all of the compliant agents for each action in a given setting, and the relative frequencies are updated en masse, using the same procedure as before. We here choose 'one at a time' to keep the numbers low.
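
A sketch of the 'one at a time' bookkeeping, under one possible reading: each observation adds 1 to the numerator of the observed action (row 0) and 1 to every denominator (row 1), so that row 0 / row 1 accumulates the total-compliance ratio above (the procedure and argument names are hypothetical):

```
to observe-one-action [ action-column ]          ; ND procedure (sketch)
  ; numerator: one more agent seen performing this action
  matrix:set working_memory 0 action-column
    (matrix:get working_memory 0 action-column + 1)
  ; denominator: one more observation made in this setting, counted for every action
  let n-columns item 1 matrix:dimensions working_memory
  foreach range n-columns [ col ->
    matrix:set working_memory 1 col (matrix:get working_memory 1 col + 1)
  ]
end
```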

II. The statistic "most popular action" is a little misleading because it compares universal actions, which all turtles can perform, with actions embedded in specific situations, which only a fraction of turtles can perform. For example, if you check to see what actions turtles adopt initially, more of them will adopt one of the "universal actions" (e.g. action 1) only because that option is available to all turtles, whereas each setting-specific action is available only to the turtles currently in that setting.

III. The actions are labelled in the "actions_m" matrix and "actions_l" list as follows: universal actions are labelled 0 to 9; actions unique to setting 1 are labelled 11 to 19; actions unique to setting 2 are labelled 21 to 29; actions in setting 3, 31 to 39; and so on. Using matrices might have been an unnecessary complication, but it worked out nicely. Each ND has a matrix called "working_memory." The WORKING_MEMORY MATRIX has 3 rows and j columns, where each column represents a possible action in the given setting. We keep track of each column index and match it up with the corresponding position in the actions_m matrix and the actions_l global list. Row 0 of the working_memory matrix is the number of agents observed performing action j. Row 1 is the total number of agents in that setting. Row 0 is the numerator and row 1 is the denominator: row 0 / row 1 = the percentage of agents in a setting performing action j. The working memory is always reset when changing social settings! Row 2 (i.e. the third row) of the matrix is the message strength row.
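
A sketch of that reset, using `matrix:make-constant` from the matrix extension (the procedure name and the way the number of columns is obtained are assumptions):

```
to reset-working-memory [ n-actions ]            ; ND procedure (sketch)
  ; row 0: observed counts per action, row 1: observation totals, row 2: message strengths
  ; one column per action available in the setting just entered
  set working_memory matrix:make-constant 3 n-actions 0
end
```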

IV. Finally, the interpretation given to this model can be challenged. In the original model, NDs converge on the one 'universal action' available across all social situations. The authors cite "standing in line" as an example, or "answering when called upon." They argue that NDs learn norms faster than SCs, but the problem is that they are simply defining 'norms' as any action common to the multiple settings. In real life, *universal actions and situation-specific actions are not mutually exclusive, but rather presuppose each other.* In their model, if agents choose 'standing in line,' they do so *instead of* doing whatever else they were going to do: agents stand in line instead of doing what they are waiting in line for, i.e. the action specified by the situation, which is what makes the situation unique in the first place. Put another way, if all agents choose the universal action in every setting, then there are no longer multiple settings!

## THINGS TO NOTICE
The main finding of this model is that agents capable of internalizing and memorizing salient social norms (norm "immergence" similar to second-order emergence or awareness) are better at converging on behaviors than are simple social conformers.

This works for the conditions established in the original study, in which there are 4 situations with 2 possible unique actions each and 1 universal action. But does this also hold true when there are many universal actions, zero universal actions, or many possible situation-specific actions? The results seem to critically depend upon the existence of 'universal actions' not specific to any particular situation.

## EXTENDING THE MODEL

PARTNER SELECTION:

Right now "messages" are randomly received- AGENTS COMMUNICATE TO OTHER randomly targeted agents their own actions at randomly varying strength (m). Agents do not select the partners with whom they communicate. Nor do agents communicate indirect, third-person information about others- rather, they only communicate their own actions. It would be fascinating to see what happens when action and communication are differentiated.

## NETLOGO FEATURES
This model uses the matrix extension.


## CREDITS AND REFERENCES
"Hunting for Norms in Unpredictable Societies" in Minding Norms: Mechanisms and Dynamics of Social Order in Agent Societies, Eds. Rosaria Conte, Giulia Andrighetto, Marco Campenni, 2014, Oxford University Press.
